PAC-MDL Bounds
Authors
Abstract
Similar papers
Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data
One of the defining properties of deep learning is that models are chosen to have many more parameters than available training data. In light of this capacity for overfitting, it is remarkable that simple algorithms like SGD reliably return solutions with low test error. One roadblock to explaining these phenomena in terms of implicit regularization, structural properties of the solution, and/o...
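The PAC-Bayes machinery this line of work builds on can be stated in one standard (McAllester-style) form; the following is a sketch for context, not the exact bound optimized in the paper. For any prior P over hypotheses fixed before seeing the m training examples, any posterior Q, and any δ ∈ (0, 1), with probability at least 1 − δ over the sample,

```latex
\mathbb{E}_{h \sim Q}\!\left[L(h)\right]
\;\le\;
\mathbb{E}_{h \sim Q}\!\left[\hat{L}(h)\right]
+ \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}},
```

where L is the true risk and L̂ the empirical risk. A bound of this type is "nonvacuous" when its right-hand side evaluates to an error below 1 for the trained network, despite the parameter count exceeding m.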
A Tight Excess Risk Bound via a Unified PAC-Bayesian-Rademacher-Shtarkov-MDL Complexity
We present a novel notion of complexity that interpolates between and generalizes some classic existing complexity notions in learning theory: for estimators like empirical risk minimization (ERM) with arbitrary bounded losses, it is upper bounded in terms of data-independent Rademacher complexity; for generalized Bayesian estimators, it is upper bounded by the data-dependent information comple...
Some Improved Sample Complexity Bounds in the Probabilistic PAC Learning Model
1 Introduction. Since Valiant's introduction of the PAC learning model [6] for boolean functions, several extensions of the model to the learning of probability distributions have been made. Yamanishi [7] and Kearns and Schapire [3] considered the problem of learning stochastic rules (or probabilistic concepts), which is the problem of learning conditional distributions. Abe and Warmuth [1...
Advanced Course in Machine Learning
The problem of characterizing learnability is the most basic question of learning theory. A fundamental and long-standing answer, formally proven for supervised classification and regression, is that learnability is equivalent to uniform convergence, and that if a problem is learnable, it is learnable via empirical risk minimization. Furthermore, for the problem of binary classification, unifor...
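As a sketch of the equivalence invoked in this abstract (notation assumed here, not taken from the source): a class ℋ has the uniform convergence property when empirical risks converge to true risks uniformly over the class, i.e. for every ε, δ there is a sample size m(ε, δ) such that with probability at least 1 − δ,

```latex
\sup_{h \in \mathcal{H}} \left| L_{\mathcal{D}}(h) - \hat{L}_{S}(h) \right| \le \varepsilon ,
```

in which case any empirical risk minimizer \(\hat{h} \in \arg\min_{h \in \mathcal{H}} \hat{L}_{S}(h)\) is a successful learner for ℋ.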
Computational Machine Learning in Theory and Praxis
In the last few decades a computational approach to machine learning has emerged based on paradigms from recursion theory and the theory of computation. Such ideas include learning in the limit, learning by enumeration, and probably approximately correct (pac) learning. These models usually are not suitable in practical situations. In contrast, statistics based inference methods have enjoyed a ...
Publication date: 2003